Model Monitor


Improve governance of your machine learning models with Amazon SageMaker

#artificialintelligence

As companies increasingly adopt machine learning (ML) for their mainstream enterprise applications, more of their business decisions are influenced by ML models. As a result, simplified access control and enhanced transparency across all your ML models make it easier to validate that your models are performing well and to take action when they are not. In this post, we explore how companies can improve visibility into their models with centralized dashboards and detailed model documentation using two new features: SageMaker Model Cards and the SageMaker Model Dashboard. Both features are available to SageMaker customers at no additional charge. Model governance is a framework that gives systematic visibility into model development, validation, and usage.


Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models

Nigenda, David, Karnin, Zohar, Zafar, Muhammad Bilal, Ramesha, Raghu, Tan, Alan, Donini, Michele, Kenthapadi, Krishnaram

arXiv.org Artificial Intelligence

With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial. Monitoring models in production is a critical aspect of ensuring their continued performance and reliability. We present Amazon SageMaker Model Monitor, a fully managed service that continuously monitors the quality of machine learning models hosted on Amazon SageMaker. Our system automatically detects data, concept, bias, and feature attribution drift in models in real time and provides alerts so that model owners can take corrective action and thereby maintain high-quality models. We describe the key requirements obtained from customers, the system design and architecture, and the methodology for detecting different types of drift. Further, we provide quantitative evaluations followed by use cases, insights, and lessons learned from more than 1.5 years of production deployment.
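The per-feature drift checks the abstract describes can be illustrated with a toy test. This is a minimal sketch, not the service's actual methodology: `drift_score`, the per-column layout, and the 3-standard-deviation threshold are all illustrative assumptions.

```python
import statistics

def drift_score(baseline, live):
    """Standardized difference between the live mean and the baseline
    mean for one feature; a crude stand-in for the statistical tests a
    monitoring service might run per feature."""
    mu_b = statistics.mean(baseline)
    sigma_b = statistics.stdev(baseline)
    mu_l = statistics.mean(live)
    return abs(mu_l - mu_b) / sigma_b if sigma_b else 0.0

def check_features(baseline_cols, live_cols, threshold=3.0):
    """Return the names of features whose live distribution has drifted
    more than `threshold` baseline standard deviations from the mean."""
    return [name for name in baseline_cols
            if drift_score(baseline_cols[name], live_cols[name]) > threshold]

baseline = {"age": [34, 41, 29, 50, 38, 45]}
live = {"age": [70, 72, 69, 75, 71, 74]}  # clearly shifted distribution
print(check_features(baseline, live))  # → ['age']
```

In practice a monitor would run richer tests (distribution distances, missing-value rates, schema checks) on a schedule and raise an alert instead of returning a list.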


Deploy shadow ML models in Amazon SageMaker

#artificialintelligence

Amazon SageMaker helps data scientists and developers prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML. SageMaker accelerates innovation within your organization by providing purpose-built tools for every step of ML development, including labeling, data preparation, feature engineering, statistical bias detection, AutoML, training, tuning, hosting, explainability, monitoring, and workflow automation. You can use a variety of techniques to deploy new ML models to production, so choosing the right strategy is an important decision. You must weigh the options in terms of the impact of change on the system and on the end users. In this post, we show you how to deploy using a shadow deployment strategy.
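The shadow strategy the post describes, in which live traffic is mirrored to a candidate model while only the production model's answer reaches the caller, can be sketched as follows. `predict_prod` and `predict_shadow` are hypothetical stand-ins for endpoint calls, not SageMaker APIs.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def predict_prod(payload):
    # stand-in for the live production model endpoint
    return {"label": "approve", "score": 0.91}

def predict_shadow(payload):
    # stand-in for the candidate model receiving mirrored traffic
    return {"label": "approve", "score": 0.87}

def handle_request(payload):
    """Serve the caller from production; mirror the request to the
    shadow model and log both predictions for offline comparison.
    The shadow result never reaches the caller."""
    prod = predict_prod(payload)
    try:
        shadow = predict_shadow(payload)
        log.info("shadow_compare %s",
                 json.dumps({"prod": prod, "shadow": shadow}))
    except Exception:
        # a shadow failure must never affect live traffic
        log.exception("shadow model failed")
    return prod

print(handle_request({"income": 52000}))
```

The key design point is the `try`/`except`: the shadow path is best-effort, so an error in the candidate model degrades only the comparison logs, never the production response.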


AWS SageMaker's new machine learning IDE isn't ready to win over data scientists

#artificialintelligence

AWS SageMaker, the machine learning brand of AWS, announced the release of SageMaker Studio, branded an "IDE for ML," on Tuesday. Machine learning has been gaining traction and, with its compute-heavy training workloads, could prove a decisive factor in the growing battle over the public cloud. So what does this new IDE mean for AWS and the public cloud market? First, the big picture (skip below for the feature-by-feature analysis of Studio): it's no secret that SageMaker's market share is minuscule (The Information put it at around $11 million in July 2019). SageMaker Studio attempts to solve important pain points for data scientists and machine learning (ML) developers by streamlining model training and maintenance workloads.


Amazon SageMaker Model Monitor – Fully Managed Automatic Monitoring for Your Machine Learning Models | Amazon Web Services

#artificialintelligence

Today, we're extremely happy to announce Amazon SageMaker Model Monitor, a new capability of Amazon SageMaker that automatically monitors machine learning (ML) models in production, and alerts you when data quality issues appear. The first thing I learned when I started working with data is that there is no such thing as paying too much attention to data quality. Raise your hand if you've spent hours hunting down problems caused by unexpected NULL values or by exotic character encodings that somehow ended up in one of your databases. As models are literally built from large amounts of data, it's easy to see why ML practitioners spend so much time caring for their data sets. In particular, they make sure that data samples in the training set (used to train the model) and in the validation set (used to measure its accuracy) have the same statistical properties.
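The data-quality concerns above, from unexpected NULLs to training and live data falling out of step, boil down to recording baseline statistics and flagging serving data that violates them. A minimal sketch under assumed names (`build_baseline`, `violations` are illustrative, and the real service computes a far richer set of statistics):

```python
def build_baseline(rows, columns):
    """Record simple per-column constraints from the training set:
    completeness (non-null fraction) and the observed value range."""
    baseline = {}
    for col in columns:
        values = [r[col] for r in rows if r.get(col) is not None]
        baseline[col] = {
            "completeness": len(values) / len(rows),
            "min": min(values),
            "max": max(values),
        }
    return baseline

def violations(rows, baseline, tolerance=0.05):
    """Flag columns whose live completeness drops more than
    `tolerance` below the baseline, e.g. a surge of NULLs."""
    flagged = []
    for col, stats in baseline.items():
        seen = sum(1 for r in rows if r.get(col) is not None)
        if seen / len(rows) < stats["completeness"] - tolerance:
            flagged.append(col)
    return flagged

train = [{"age": 30}, {"age": 41}, {"age": 28}, {"age": 55}]
live = [{"age": 33}, {"age": None}, {"age": None}, {"age": 47}]
print(violations(live, build_baseline(train, ["age"])))  # → ['age']
```

A production monitor would attach alerting to such violations rather than returning them, and would also compare distributions, not just null rates.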


SageMaker Studio makes model building, monitoring easier

#artificialintelligence

AWS launched a host of new tools and capabilities for Amazon SageMaker, AWS' cloud platform for creating and deploying machine learning models; drawing the most notice was Amazon SageMaker Studio, a web-based integrated development environment (IDE). In addition to SageMaker Studio, the IDE for building, using, and monitoring machine learning models, the other new AWS products aim to make it easier for non-expert developers to create models and to make those models more explainable. During a keynote presentation at the AWS re:Invent 2019 conference on Tuesday, AWS CEO Andy Jassy described five other new SageMaker tools: Experiments, Model Monitor, Autopilot, Notebooks, and Debugger. "SageMaker Studio along with SageMaker Experiments, SageMaker Model Monitor, SageMaker Autopilot, and SageMaker Debugger collectively add lots more lifecycle capabilities for the full ML (machine learning) lifecycle and to support teams," said Mike Gualtieri, an analyst at Forrester. SageMaker Studio, Jassy claimed, is a "fully integrated development environment for machine learning." The new platform pulls together all of SageMaker's capabilities, along with code, notebooks, and datasets, into one environment.


Safe Reinforcement Learning via Formal Methods: Toward Safe Control Through Proof and Learning

Fulton, Nathan (Carnegie Mellon University) | Platzer, André (Carnegie Mellon University)

AAAI Conferences

Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true in Cyber-Physical Systems because high-fidelity physical models of systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but do not provide guarantees of safe operation. This paper presents an approach for provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification. Our main insight is that formal verification combined with verified runtime monitoring can ensure the safety of a learning agent. Verification results are preserved whenever learning agents limit exploration within the confines of verified control choices as long as observed reality comports with the model used for off-line verification. When a model violation is detected, the agent abandons efficiency and instead attempts to learn a control strategy that guides the agent to a modeled portion of the state space. We prove that our approach toward incorporating knowledge about safe control into learning systems preserves safety guarantees, and demonstrate that we retain the empirical performance benefits provided by reinforcement learning. We also explore various points in the design space for these justified speculative controllers in a simple adaptive cruise control model for autonomous cars.
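The paper's core mechanism, a runtime monitor that restricts the learning agent to actions certified safe offline and falls back to a verified controller otherwise, can be sketched in a toy cruise-control setting. Every name here, including the `SAFE_GAP` constant and the braking fallback, is an illustrative assumption, not the authors' formalism.

```python
SAFE_GAP = 10.0  # metres; assumed minimum verified-safe headway

def is_verified_safe(gap, accel):
    """Stand-in for the offline-verified safety condition: the
    proposed acceleration must keep the predicted gap above SAFE_GAP."""
    return gap - accel >= SAFE_GAP

def fallback_action(gap):
    """Verified backup controller: brake toward the modeled
    (known-safe) region of the state space."""
    return -1.0

def choose_action(gap, learned_policy):
    """Let the learner explore, but only among actions the runtime
    monitor certifies as safe; otherwise use the verified fallback."""
    proposal = learned_policy(gap)
    if is_verified_safe(gap, proposal):
        return proposal
    return fallback_action(gap)

def aggressive(gap):
    # a toy learned policy that always floors the accelerator
    return 5.0

print(choose_action(50.0, aggressive))  # → 5.0 (proposal certified safe)
print(choose_action(12.0, aggressive))  # → -1.0 (monitor forces fallback)
```

The safety argument mirrors the abstract: as long as reality matches the model behind `is_verified_safe`, every action the agent actually executes is one the offline verification already covered.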